
    Precision in the perception of direction of a moving pattern

    The precision of the model of pattern motion analysis put forth by Adelson and Movshon (1982), who proposed that humans determine the direction of a moving plaid (the sum of two sinusoidal gratings of different orientations) in two steps, is examined quantitatively. The velocities of the grating components are first estimated, then combined using the intersection of constraints to determine the velocity of the plaid as a whole. Under the additional assumption that the noise sources for the component velocities are independent, an approximate expression can be derived for the precision in plaid direction as a function of the precision in the speed and direction of the components. Monte Carlo simulations verify that the expression is valid to within 5 percent over the natural range of the parameters. The expression is then used to predict human performance based on available estimates of human precision in the judgment of single component speed. Human performance is predicted to deteriorate by a factor of 3 as half the angle between the wavefronts (theta) decreases from 60 to 30 deg, but actual performance does not. The mean direction discrimination for three human observers was 4.3 plus or minus 0.9 deg (SD) for theta = 60 deg and 5.9 plus or minus 1.2 deg for theta = 30 deg. This discrepancy can be resolved in two ways: if the noises in the internal representations of the component speeds are smaller than the available estimates, or if these noises are not independent, then the psychophysical results are consistent with the Adelson-Movshon hypothesis.
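
    The two-stage computation and its error propagation can be sketched with a short Monte Carlo, assuming (hypothetically) unit component speeds and a plaid whose component normals sit symmetrically at plus or minus theta about the plaid direction; `ioc_direction` and its noise parameters are illustrative, not the paper's actual simulation:

```python
import numpy as np

rng = np.random.default_rng(0)

def ioc_direction(theta_deg, speed_sd, dir_sd_deg, n=100_000):
    """Monte Carlo estimate of plaid-direction SD under the two-stage model.

    Each grating's normal lies at +/- theta from the plaid direction (here
    0 deg) and moves at unit speed along its normal.  Independent Gaussian
    noise perturbs each component's speed and direction, and the plaid
    velocity v is recovered from the intersection of constraints:
    v . n_i = s_i for i = 1, 2.
    """
    theta = np.deg2rad(theta_deg)
    # noisy component normal directions and speeds
    d1 = theta + np.deg2rad(dir_sd_deg) * rng.standard_normal(n)
    d2 = -theta + np.deg2rad(dir_sd_deg) * rng.standard_normal(n)
    s1 = 1.0 + speed_sd * rng.standard_normal(n)
    s2 = 1.0 + speed_sd * rng.standard_normal(n)
    # solve the 2x2 linear system [n1; n2] v = [s1; s2] per trial (Cramer)
    n1 = np.stack([np.cos(d1), np.sin(d1)], axis=-1)
    n2 = np.stack([np.cos(d2), np.sin(d2)], axis=-1)
    det = n1[:, 0] * n2[:, 1] - n1[:, 1] * n2[:, 0]
    vx = (s1 * n2[:, 1] - s2 * n1[:, 1]) / det
    vy = (s2 * n1[:, 0] - s1 * n2[:, 0]) / det
    return np.rad2deg(np.std(np.arctan2(vy, vx)))

# direction noise in the recovered plaid grows as theta shrinks
print(ioc_direction(60, 0.05, 2.0), ioc_direction(30, 0.05, 2.0))
```

    Because speed errors project onto the plaid direction roughly as 1/tan(theta), the simulated direction SD grows as theta decreases, reproducing the predicted deterioration at small theta.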

    Effect of contrast on human speed perception

    This study is part of an ongoing collaborative research effort between the Life Science and Human Factors Divisions at NASA ARC to measure the accuracy of human motion perception in order to predict potential errors in human perception/performance and to facilitate the design of display systems that minimize the effects of such deficits. The study describes how contrast manipulations can produce significant errors in human speed perception. Specifically, when two simultaneously presented parallel gratings are moving at the same speed within stationary windows, the lower-contrast grating appears to move more slowly. This contrast-induced misperception of relative speed is evident across a wide range of contrasts (2.5-50 percent) and does not appear to saturate (e.g., a 50 percent contrast grating appears slower than a 70 percent contrast grating moving at the same speed). The misperception is large: a 70 percent contrast grating must, on average, be slowed by 35 percent to match a 10 percent contrast grating moving at 2 deg/sec (N = 6). Furthermore, it is largely independent of the absolute contrast level and is a quasilinear function of log contrast ratio. A preliminary parametric study shows that, although spatial frequency has little effect, the relative orientation of the two gratings is important. Finally, the effect depends on the temporal presentation of the stimuli: the effect of contrast on perceived speed appears lessened when the stimuli to be matched are presented sequentially. These data constrain both physiological models of visual cortex and models of human performance. We conclude that viewing conditions that affect contrast, such as fog, may cause significant errors in speed judgments.

    Efficient use of bit planes in the generation of motion stimuli

    The production of animated motion sequences on computer-controlled display systems presents a technical problem because large images cannot be transferred from disk storage to image memory at conventional frame rates. A technique is described in which a single base image can be used to generate a broad class of motion stimuli without the need for such memory transfers. This technique was applied to the generation of drifting sine-wave gratings (and by extension, sine-wave plaids). For each drifting grating, sine and cosine spatial phase components are first reduced to 1 bit/pixel using a digital halftoning technique. The resulting pairs of 1-bit images are then loaded into pairs of bit planes of the display memory. To animate the patterns, the display hardware's color lookup table is modified on a frame-by-frame basis; for each frame the lookup table is set to display a weighted sum of the spatial sine and cosine phase components. Because the contrasts and temporal frequencies of the various components are mutually independent in each frame, the sine and cosine components can be counterphase modulated in temporal quadrature, yielding a single drifting grating. Using additional bit planes, multiple drifting gratings can be combined to form sine-wave plaid patterns. A large number of resultant plaid motions can be produced from a single image file because the temporal frequencies of all the components can be varied independently. For a graphics device having 8 bits/pixel, up to four drifting gratings may be combined, each having independently variable contrast and speed.
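
    The reason two counterphase-modulated quadrature components yield a single drifting grating is the identity sin(kx - wt) = sin(kx)cos(wt) - cos(kx)sin(wt): the two fixed base images need only be weighted per frame by cos(wt) and -sin(wt). A minimal NumPy sketch of this reconstruction (ignoring the 1-bit halftoning step; all values here are illustrative):

```python
import numpy as np

# One period of a spatial grating, and the two fixed quadrature base images
# that would each be halftoned to 1 bit/pixel and loaded into bit planes.
x = np.linspace(0, 2 * np.pi, 256, endpoint=False)
sin_phase = np.sin(x)   # base image 1 (sine spatial phase)
cos_phase = np.cos(x)   # base image 2 (cosine spatial phase)

# Drift comes entirely from the per-frame lookup-table weights:
# counterphase-modulating the two base images in temporal quadrature
# reconstructs a single grating drifting at temporal frequency w.
w, t = 2 * np.pi * 4.0, 0.01            # example temporal frequency and time
frame = np.cos(w * t) * sin_phase - np.sin(w * t) * cos_phase

assert np.allclose(frame, np.sin(x - w * t))
```

    Since each component's weights live in the lookup table rather than the image, changing drift speed or contrast requires no new image transfers, which is the point of the technique.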

    Combining Speed Information Across Space

    We used speed discrimination tasks to measure the ability of observers to combine speed information from multiple stimuli distributed across space. We compared speed discrimination thresholds in a classical discrimination paradigm to those in an uncertainty/search paradigm. Thresholds were measured using a temporal two-interval forced-choice design. In the discrimination paradigm, the n gratings in each interval all moved at the same speed and observers were asked to choose the interval with the faster gratings. Discrimination thresholds for this paradigm decreased as the number of gratings increased. This decrease was not due to increasing the effective stimulus area, as a control experiment that increased the area of a single grating did not show a similar improvement in thresholds. Adding independent speed noise to each of the n gratings caused thresholds to decrease at a rate similar to the original no-noise case, consistent with observers combining an independent sample of speed from each grating in both the added- and no-noise cases. In the search paradigm, observers were asked to choose the interval in which one of the n gratings moved faster. Thresholds in this case increased with the number of gratings, behavior traditionally attributed to an input bottleneck. However, results from the discrimination paradigm showed that the increase was not due to observers' inability to process these gratings. We have also shown that the opposite trends of the data in the two paradigms can be predicted by a decision theory model that combines independent samples of speed information across space. This demonstrates that models typically used in classical detection and discrimination paradigms are also applicable to search paradigms. As our model does not distinguish between samples in space and time, it predicts that discrimination performance should be the same regardless of whether the gratings are presented in two spatial intervals or two temporal intervals. Our last experiment largely confirmed this prediction.
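
    The opposite threshold trends follow from a simple signal-detection account, sketched below with assumed unit-variance Gaussian speed samples: the discrimination observer compares the mean of n samples per interval (performance improves with n), while the search observer compares the max (performance worsens with n). The function and parameters are illustrative, not the fitted model from the study:

```python
import numpy as np

rng = np.random.default_rng(1)

def percent_correct(n, delta, rule, trials=200_000):
    """2IFC percent correct with n unit-variance speed samples per interval.

    rule='mean': discrimination -- every grating in the faster interval is
    incremented by delta; the observer compares the mean sample across
    gratings.
    rule='max': search -- only one grating carries the increment; the
    observer compares the max sample across gratings.
    """
    std = rng.standard_normal((trials, n))   # standard interval
    tgt = rng.standard_normal((trials, n))   # test interval
    if rule == 'mean':
        tgt = tgt + delta
        return np.mean(tgt.mean(axis=1) > std.mean(axis=1))
    tgt[:, 0] += delta                       # a single faster grating
    return np.mean(tgt.max(axis=1) > std.max(axis=1))

# accuracy at a fixed speed increment: rises with n for the mean rule,
# falls with n for the max rule -- i.e., thresholds move in opposite
# directions without any input bottleneck
for n in (1, 2, 4, 8):
    print(n, percent_correct(n, 1.0, 'mean'), percent_correct(n, 1.0, 'max'))
```

    The max rule degrades with n not because the samples go unprocessed but because each extra non-target sample adds a chance that noise alone wins either interval, which is the bottleneck-free explanation the abstract describes.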

    Direct Relationship Between Perceptual and Motor Variability

    The time that elapses between stimulus onset and the onset of a saccadic eye movement is longer and more variable than can be explained by neural transmission times and synaptic delays (Carpenter, 1981, in: Eye Movements: Cognition & Visual Perception, Erlbaum). In theory, noise underlying response-time (RT) variability could arise at any point along the sensorimotor cascade, from sensory noise arising within the early visual processing shared with perception to noise in the motor criterion or commands necessary to trigger movements. These two loci for internal noise can be distinguished empirically: sensory internal noise predicts that response time will correlate with perceived stimulus magnitude, whereas motor internal noise predicts no such correlation. Methods. We used the data described by Liston and Stone (2008, JNS 28:13866-13875), in which subjects performed a 2AFC saccadic brightness discrimination task and the perceived brightness of the chosen stimulus was then quantified in a second 2IFC perceptual task. Results. We binned each subject's data into quartiles for both signal strength (from dimmest to brightest) and RT (from slowest to fastest) and analyzed the trends in perceived brightness. We found significant effects of both signal strength (as expected) and RT on normalized perceived brightness (both p less than 0.0001, 2-way ANOVA), without significant interaction (p = 0.95, 2-way ANOVA). A plot of normalized perceived brightness versus normalized RT shows that more than half of the variance was shared (r2 = 0.56, p less than 0.0001). To rule out any possibility that some signal-strength-related artifact was generating this effect, we ran a control analysis on pairs of trials with repeated presentations of identical stimuli and found that stimuli are perceived to be brighter on trials with faster saccades (p less than 0.001, paired t-test across subjects). Conclusion. These data show that shared early visual internal noise jitters perceived brightness and the saccadic motor output in parallel. While the present correlation could theoretically result, either directly or indirectly, from some low-level brainstem or retinal mechanism (e.g., arousal, pupil size, photoreceptor noise) that influences both visual and oculomotor circuits, this is unlikely given the earlier finding that the variability in perceived motion direction and smooth-pursuit motor output is highly correlated (Stone and Krauzlis, 2003, JOV 3:725-736), suggesting that cortical circuits contribute to the shared internal noise.
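
    The shared-sensory-noise prediction can be illustrated with a toy simulation in which a single early noise sample jitters both the perceptual readout and a LATER-style saccadic decision (RT inversely related to the encoded signal strength); all variable names and parameters below are hypothetical, not the study's fitted model:

```python
import numpy as np

rng = np.random.default_rng(2)

# One early noise sample perturbs the encoded brightness, which then drives
# BOTH the perceptual report and the saccade trigger.
signal = 1.0                              # fixed physical brightness
noise = 0.2 * rng.standard_normal(100_000)
encoded = signal + noise                  # shared early representation
perceived = encoded                       # perceptual report reads it out
rt = 1.0 / np.clip(encoded, 0.1, None)    # stronger evidence -> faster saccade

# Shared sensory noise predicts faster trials look brighter (a negative
# RT/perceived-brightness correlation); purely motor noise would add
# variability to rt alone and predict r near 0.
r = np.corrcoef(rt, perceived)[0, 1]
print(r)
```

    In this sketch r is strongly negative even though the stimulus never changes, mirroring the repeated-stimulus control analysis: identical stimuli are judged brighter on faster-saccade trials only if the noise is shared upstream of both outputs.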

    Comprehensive Oculomotor Behavioral Response Assessment (COBRA)

    An eye movement-based methodology and assessment tool may be used to quantify many aspects of human dynamic visual processing using a relatively simple and short oculomotor task, noninvasive video-based eye tracking, and validated oculometric analysis techniques. By examining the eye movement responses to a task including a radially-organized, appropriately randomized sequence of Rashbass-like step-ramp pursuit-tracking trials, distinct performance measurements may be generated that may be associated with, for example, pursuit initiation (e.g., latency and open-loop pursuit acceleration), steady-state tracking (e.g., gain, catch-up saccade amplitude, and the proportion of the steady-state response consisting of smooth movement), direction tuning (e.g., oblique effect amplitude, horizontal-vertical asymmetry, and direction noise), and speed tuning (e.g., speed responsiveness and noise). This quantitative approach may provide fast results (e.g., a multi-dimensional set of oculometrics and a single scalar impairment index) that can be interpreted by someone without a high degree of scientific sophistication or extensive training.
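
    As a rough illustration of how two such oculometrics might be computed (a sketch on a synthetic trace, not the COBRA implementation; the threshold, window, and trace parameters are assumptions), pursuit latency and steady-state gain can be extracted from an eye-velocity record as follows:

```python
import numpy as np

# Synthetic eye-velocity trace sampled at 1 kHz: target onset at t = 0,
# 150 ms latency, then an exponential rise toward 0.9 gain, plus noise.
fs = 1000.0                                  # sampling rate, Hz
t = np.arange(0, 0.8, 1 / fs)                # 800 ms trial
target_speed = 10.0                          # deg/s
eye = np.where(t < 0.15, 0.0,
               0.9 * target_speed * (1 - np.exp(-(t - 0.15) / 0.05)))
eye = eye + 0.3 * np.random.default_rng(3).standard_normal(t.size)

# Latency: first time eye velocity exceeds a fixed fraction of target speed.
latency_ms = 1000 * t[np.argmax(eye > 0.2 * target_speed)]

# Steady-state gain: mean eye speed / target speed late in the trial.
steady = t > 0.4                             # assumed steady-state window
gain = eye[steady].mean() / target_speed

print(latency_ms, round(gain, 2))
```

    Real oculometric pipelines also desaccade the trace and fit the open-loop acceleration; this sketch only shows the shape of the per-trial measurements that feed the multi-dimensional summary.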

    On the Visual Input Driving Human Smooth-Pursuit Eye Movements

    Current computational models of smooth-pursuit eye movements assume that the primary visual input is local retinal-image motion (often referred to as retinal slip). However, we show that humans can pursue object motion with considerable accuracy, even in the presence of conflicting local image motion. This finding indicates that the visual cortical area(s) controlling pursuit must be able to perform a spatio-temporal integration of local image motion into a signal related to object motion. We also provide evidence that the object-motion signal that drives pursuit is related to the signal that supports perception. We conclude that current models of pursuit should be modified to include a visual input that encodes perceived object motion and not merely retinal image motion. Finally, our findings suggest that the measurement of eye movements can be used to monitor visual perception, with particular value in applied settings, as this non-intrusive approach would not require interrupting ongoing work or training.

    Effects of Vibration and G-Loading on Heart Rate, Breathing Rate, and Response Time

    Aerospace and applied environments commonly expose pilots and astronauts to G-loading and vibration, alone and in combination, with well-known sensorimotor (Cohen, 1970) and performance consequences (Adelstein et al., 2008). Physiological variables such as heart rate (HR) and breathing rate (BR) have been shown to increase with G-loading (Yajima et al., 1994) and vibration (e.g., Guignard, 1965, 1985) alone. To examine the effects of G-loading and vibration, alone and in combination, we measured heart rate and breathing rate under aerospace-relevant conditions (G-loads of 1 Gx and 3.8 Gx; vibration of 0.5 gx at 8, 12, and 16 Hz).